Approximate Kalman-Bucy Filter for Continuous-Time Semi-Markov Jump Linear Systems

Authors
Abstract


Similar resources

Robust H2 control of continuous-time Markov jump linear systems

This paper is concerned with the problem of designing robust H2 state-feedback controllers for continuous-time Markov jump linear systems subject to polytopic-type parameter uncertainty. Based on the parameter-dependent Lyapunov function approach, a new method for designing robust H2 controllers is presented in terms of solutions to a set of linear matrix inequalities. A numerical example is gi...
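As a rough sketch of the setup this abstract describes (the vertex matrices, generator rates, and simplex weights below are illustrative notation, not taken from the paper), a continuous-time Markov jump linear system with polytopic uncertainty and a parameter-dependent Lyapunov function can be written as:

```latex
% Mode \theta_t \in \{1,\dots,N\} follows a Markov chain with generator rates \lambda_{ij};
% each mode's matrices lie in a polytope with V vertices (illustrative notation).
\begin{aligned}
\dot{x}(t) &= A_{\theta_t}(\alpha)\,x(t) + B_{\theta_t}(\alpha)\,u(t) + E_{\theta_t}\,w(t),\\
\bigl(A_i(\alpha),\,B_i(\alpha)\bigr) &= \sum_{k=1}^{V} \alpha_k \bigl(A_i^{(k)},\,B_i^{(k)}\bigr),
\qquad \alpha_k \ge 0,\quad \sum_{k=1}^{V} \alpha_k = 1,\\
V_i(x,\alpha) &= x^{\top} P_i(\alpha)\,x,
\qquad P_i(\alpha) = \sum_{k=1}^{V} \alpha_k P_i^{(k)} \succ 0 .
\end{aligned}
```

A mode-dependent gain u(t) = K_{\theta_t} x(t) is then sought so that the closed-loop H2 norm stays bounded for every \alpha in the simplex; the LMIs mentioned in the abstract encode this, and their exact form is given in the paper.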


Stabilization of continuous-time jump linear systems

In this paper, we investigate almost-sure and moment stabilization of continuous-time jump linear systems with a finite-state Markov jump form process. We first clarify the concepts of δ-moment stabilizability, exponential δ-moment stabilizability, and stochastic δ-moment stabilizability. We then present results on the relationships among these concepts. Coupled Riccati equations that provide nece...
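For the second-moment (mean-square) case, the coupled Riccati equations referred to here take the well-known jump-LQR form A_i^T P_i + P_i A_i − P_i B_i R_i^{-1} B_i^T P_i + Q_i + Σ_j λ_ij P_j = 0. The sketch below assumes that form with made-up two-mode data; the fixed-point iteration is a generic numerical device, not the procedure of the paper, and its convergence is not guaranteed in general.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative two-mode jump linear system (made-up data, not from the paper).
A = [np.array([[0.0, 1.0], [2.0, -1.0]]), np.array([[0.0, 1.0], [-1.0, -3.0]])]
B = [np.array([[0.0], [1.0]]),            np.array([[0.0], [0.5]])]
Q = [np.eye(2), np.eye(2)]
R = [np.eye(1), np.eye(1)]
Lam = np.array([[-1.0,  1.0],   # generator of the Markov form process;
                [ 2.0, -2.0]])  # each row sums to zero

N, n = len(A), A[0].shape[0]
P = [np.zeros((n, n)) for _ in range(N)]

# Fixed-point iteration: fold 0.5*lambda_ii into the drift and the cross-coupling
# sum_{j != i} lambda_ij * P_j into the state weight, then solve one ordinary CARE
# per mode with SciPy.  At a fixed point the coupled equations above hold.
for _ in range(200):
    P_new = []
    for i in range(N):
        Ai = A[i] + 0.5 * Lam[i, i] * np.eye(n)
        Qi = Q[i] + sum(Lam[i, j] * P[j] for j in range(N) if j != i)
        P_new.append(solve_continuous_are(Ai, B[i], Qi, R[i]))
    done = max(np.linalg.norm(Pn - Po) for Pn, Po in zip(P_new, P)) < 1e-10
    P = P_new
    if done:
        break

# Mode-dependent stabilizing gains u = -K_i x.
K = [np.linalg.solve(R[i], B[i].T @ P[i]) for i in range(N)]
print([np.round(Ki, 3) for Ki in K])
```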


Estimation of Change Point via Kalman-Bucy Filter for Linear Systems Driven by Fractional Brownian Motions

We study the estimation of a change point via a Kalman-Bucy filter for linear systems driven by fractional Brownian motions.


A receding horizon Kalman FIR filter for linear continuous-time systems

A receding horizon Kalman finite-impulse response (FIR) filter is suggested for continuous-time systems, combining the Kalman filter with the receding horizon strategy. In the suggested filter, the horizon initial state is assumed to be unknown, and the filter can always be obtained irrespective of any information about that initial state. The filter may be the first stochastic FIR form for continu...
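A discrete-time illustration of the receding-horizon idea described above (not the paper's continuous-time FIR filter): at each step the state is re-estimated from only the last M measurements, and the unknown window-initial state gets a diffuse prior so it contributes essentially no information. The function name, data, and noise levels are made up.

```python
import numpy as np

def receding_horizon_estimate(ys, A, C, Qw, Rv, M, diffuse=1e9):
    """FIR-style sketch: at each time k, run a Kalman filter over only the last
    M measurements, starting from a diffuse covariance so the unknown
    window-initial state carries essentially no prior information."""
    n = A.shape[0]
    estimates = []
    for k in range(len(ys)):
        window = ys[max(0, k - M + 1): k + 1]
        x = np.zeros(n)
        P = diffuse * np.eye(n)          # diffuse prior on the horizon initial state
        for j, y in enumerate(window):
            if j > 0:                    # propagate to the next sample instant
                x = A @ x
                P = A @ P @ A.T + Qw
            S = C @ P @ C.T + Rv         # measurement update
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (np.atleast_1d(y) - C @ x)
            P = (np.eye(n) - K @ C) @ P
        estimates.append(x)              # filtered estimate at time k
    return np.array(estimates)

# Made-up scalar example: a slow random walk observed in heavy noise.
A = np.array([[1.0]]); C = np.array([[1.0]])
Qw = np.array([[0.01]]); Rv = np.array([[1.0]])
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0.0, 0.1, 200))
ys = x_true + rng.normal(0.0, 1.0, 200)
xhat = receding_horizon_estimate(ys, A, C, Qw, Rv, M=25)
```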


Approximate Kalman Filter Q-Learning for Continuous State-Space MDPs

We seek to learn an effective policy for a Markov Decision Process (MDP) with continuous states via Q-Learning. Given a set of basis functions over state-action pairs, we search for a corresponding set of linear weights that minimizes the mean Bellman residual. Our algorithm uses a Kalman filter model to estimate those weights, and we have developed a simpler approximate Kalman filter model that ...
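A generic sketch of estimating linear Q-function weights with a Kalman filter, in the spirit of the description above (this is not the paper's algorithm; the class name, drift, and noise parameters are invented). The weights are treated as a slowly drifting hidden state, and each observed transition supplies one scalar observation r ≈ (φ(s,a) − γ φ(s',a'))ᵀ w.

```python
import numpy as np

class KalmanQWeights:
    """Illustrative sketch: the linear Q-function weights are the hidden state
    of a random-walk model; each transition gives the scalar observation
    r ~ (phi(s,a) - gamma * phi(s_next, a_next))^T w + noise."""

    def __init__(self, dim, gamma=0.99, drift=1e-4, obs_noise=1.0):
        self.w = np.zeros(dim)            # filter mean: current weight estimate
        self.P = np.eye(dim)              # filter covariance over the weights
        self.gamma = gamma
        self.Q = drift * np.eye(dim)      # random-walk process noise on the weights
        self.R = obs_noise                # variance of the TD "measurement" noise

    def q_value(self, phi_sa):
        return float(phi_sa @ self.w)

    def update(self, phi_sa, reward, phi_next, terminal=False):
        # Observation row h chosen so that reward ~ h @ w + noise.
        h = phi_sa - (0.0 if terminal else self.gamma) * phi_next
        self.P = self.P + self.Q                       # time update: weights drift
        s = float(h @ self.P @ h) + self.R             # innovation variance
        k = self.P @ h / s                             # Kalman gain
        self.w = self.w + k * (reward - float(h @ self.w))
        self.P = self.P - np.outer(k, h) @ self.P      # covariance update
```

In a Q-learning loop, phi_next would be the feature vector of the greedy next action under the current weight estimate, which makes the observation model depend on the weights themselves; that dependence is what approximate filter variants have to handle.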



Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2016

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: 10.1109/tac.2015.2495578